7 research outputs found

    Reliable Machine Learning Model for IIoT Botnet Detection

    With the growing number of Internet of Things (IoT) devices, network attacks such as denial of service (DoS) and flooding are on the rise, raising security and reliability concerns. As a result of these attacks, IoT devices suffer denial of service and network disruption. Researchers have applied a variety of techniques to identify attacks aimed at vulnerable IoT devices. In this study, we propose a novel feature selection algorithm, FGOA-kNN, based on hybrid filter and wrapper selection approaches to select the most relevant features. The approach first ranks the features using clustering and then applies the Grasshopper Optimization Algorithm (GOA) to reduce the top-ranked set. Moreover, a proposed algorithm, IHHO, selects and adapts the neural network’s hyperparameters to detect botnets efficiently. The Harris Hawks Optimization (HHO) algorithm is enhanced with three improvements that strengthen the global search for optimal solutions. To address population diversity, a chaotic map function is used for initialization. The escape energy of the hawks is updated with a new nonlinear formula to avoid local minima and better balance exploration and exploitation. Furthermore, the exploitation phase of HHO is enhanced with a new elite operator, ROBL. The proposed model combines unsupervised clustering and supervised approaches to detect intrusion behaviors. The N-BaIoT dataset is used to validate the proposed model, and several recent techniques were used to assess and compare its performance. The results demonstrate that the proposed model outperforms the other variants at detecting multiclass botnet attacks.
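
    Two of the three HHO improvements named above are concrete enough to sketch. The snippet below is a minimal Python illustration of chaotic-map initialization and a nonlinear escape-energy schedule; the abstract does not give the exact formulas, so the logistic map and the cosine-shaped decay used here are assumptions, not the paper’s equations.

    import numpy as np

    def chaotic_init(pop_size, dim, lb, ub, seed=0.7):
        """Initialize a population with a logistic chaotic map (assumed form)."""
        x = np.full(dim, seed)
        pop = np.empty((pop_size, dim))
        for i in range(pop_size):
            x = 4.0 * x * (1.0 - x)          # logistic map with r = 4
            pop[i] = lb + x * (ub - lb)      # scale chaotic values into [lb, ub]
        return pop

    def escape_energy(t, t_max, e0):
        """Nonlinear escape-energy decay (assumed cosine schedule, not the paper's)."""
        return 2.0 * e0 * np.cos((np.pi / 2.0) * t / t_max)

    pop = chaotic_init(pop_size=30, dim=10, lb=-1.0, ub=1.0)
    for t in range(100):
        e = escape_energy(t, t_max=100, e0=np.random.uniform(-1, 1))
        # standard HHO rule: |e| >= 1 triggers exploration, |e| < 1 exploitation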

    Multi-Label Active Learning-Based Machine Learning Model for Heart Disease Prediction

    The rapid growth and adaptation of medical information to identify significant health trends and support timely preventive care have been recent hallmarks of the modern healthcare data system. Heart disease is the deadliest condition in the developed world. Cardiovascular disease and its complications, including dementia, can be averted with early detection, and further research in this area is needed to prevent strokes and heart attacks. An optimal machine learning model can help achieve this goal given the wealth of healthcare data on heart disease, since heart disease can be predicted and diagnosed using machine-learning-based systems. Active learning (AL) methods improve classification quality by incorporating expert feedback on sparsely labelled data. In this paper, five selection strategies for multi-label active learning (MMC, Random, Adaptive, QUIRE, and AUDI) were applied to reduce labelling costs by iteratively selecting the most relevant data and querying their labels. Each selection method was paired with a label ranking classifier whose hyperparameters were optimized by grid search to build a predictive model for the heart disease dataset in every scenario. The experimental evaluation covers accuracy and F-score with and without hyperparameter optimization. Results show that, in terms of accuracy, one selection method let the optimized label ranking model generalize beyond the existing data better than the others, while a different selection method stood out with regard to the F-score under the optimized settings.
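
    As a rough illustration of the querying loop such strategies plug into, the sketch below runs pool-based multi-label active learning with plain uncertainty sampling standing in for MMC/QUIRE/AUDI (whose exact scoring rules are not given here) and synthetic data standing in for the heart disease dataset.

    import numpy as np
    from sklearn.datasets import make_multilabel_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier

    X, Y = make_multilabel_classification(n_samples=400, n_classes=5,
                                          random_state=0)
    labeled = list(range(50))                  # small labelled seed set
    pool = [i for i in range(len(X)) if i not in labeled]
    model = OneVsRestClassifier(LogisticRegression(max_iter=1000))

    for _ in range(10):                        # ten query rounds
        model.fit(X[labeled], Y[labeled])
        proba = model.predict_proba(X[pool])   # shape: (n_pool, n_labels)
        # query the instance whose label probabilities are closest to 0.5
        uncertainty = -np.abs(proba - 0.5).mean(axis=1)
        q = pool.pop(int(np.argmax(uncertainty)))
        labeled.append(q)                      # oracle reveals Y[q]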

    Deep Learning-Based Model for Financial Distress Prediction

    Predicting bankruptcies and assessing credit risk are two of the most pressing issues in finance, so financial distress prediction and credit scoring remain hot research topics in the finance sector. Earlier studies have focused on designing statistical approaches and machine learning models to predict a company’s financial distress. In this study, an adaptive whale optimization algorithm with deep learning (AWOA-DL) technique is used to create a new financial distress prediction model. The goal of the AWOA-DL approach is to determine whether a company is experiencing financial distress. The method combines a deep neural network (DNN) predictor, namely a multilayer perceptron (MLP), with an AWOA-based hyperparameter tuning process. First, the DNN model receives the financial data as input and predicts financial distress. In addition, AWOA is applied to tune the DNN model’s hyperparameters, thereby improving the predictive outcome. The proposed model operates in three stages: preprocessing, hyperparameter tuning using AWOA, and prediction. A comprehensive simulation was carried out on four datasets, and the results point to the supremacy of the AWOA-DL method over the compared techniques: it achieved an average accuracy of 95.8%, against 93.8%, 89.6%, 84.5%, and 78.2% for the compared models.
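
    As a sketch of how a whale-style optimizer can drive MLP hyperparameter tuning, the snippet below runs plain WOA (the adaptive variant’s specifics are not in the abstract) over two assumed hyperparameters, the log learning rate and the hidden-layer width, with validation accuracy as the fitness and synthetic data standing in for the financial datasets.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=600, random_state=0)
    Xtr, Xva, ytr, yva = train_test_split(X, y, random_state=0)
    lb, ub = np.array([-4.0, 8.0]), np.array([-1.0, 128.0])

    def fitness(w):
        """Validation accuracy of an MLP built from a position vector."""
        clf = MLPClassifier(hidden_layer_sizes=(int(w[1]),),
                            learning_rate_init=10.0 ** w[0],
                            max_iter=300, random_state=0)
        return clf.fit(Xtr, ytr).score(Xva, yva)

    rng = np.random.default_rng(0)
    whales = rng.uniform(lb, ub, size=(5, 2))
    scores = np.array([fitness(w) for w in whales])
    best, best_score = whales[scores.argmax()].copy(), scores.max()

    for t in range(5):
        a = 2.0 - 2.0 * t / 5                     # a decreases from 2 to 0
        for i in range(len(whales)):
            r1, r2, p = rng.random(3)
            A, C = 2 * a * r1 - a, 2 * r2
            if p < 0.5:                           # encircle best or explore
                ref = best if abs(A) < 1 else whales[rng.integers(len(whales))]
                whales[i] = ref - A * np.abs(C * ref - whales[i])
            else:                                 # log-spiral around the best
                l = rng.uniform(-1, 1)
                whales[i] = (np.abs(best - whales[i]) * np.exp(l)
                             * np.cos(2 * np.pi * l) + best)
            whales[i] = np.clip(whales[i], lb, ub)
            s = fitness(whales[i])
            if s > best_score:
                best, best_score = whales[i].copy(), s

    print("best [log10 lr, hidden units]:", best, "val acc:", best_score)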

    Return Rate Prediction in Blockchain Financial Products Using Deep Learning

    Recently, bitcoin-based blockchain technologies have received significant interest among investors, who concentrate on predicting the return and risk rates of these financial products. An automated tool to predict the return rate of bitcoin is therefore needed for financial products. Recently designed machine learning and deep learning models pave the way for the return rate prediction process. In this respect, this study develops an intelligent return rate predictive approach using deep learning for blockchain financial products (RRP-DLBFP). The proposed RRP-DLBFP technique involves designing a long short-term memory (LSTM) model for the predictive analysis of the return rate. In addition, the Adam optimizer is applied to optimally adjust the LSTM model’s hyperparameters, consequently increasing the predictive performance. The learning rate of the LSTM model is adjusted using the oppositional glowworm swarm optimization (OGSO) algorithm, and the design of the OGSO algorithm to optimize the LSTM hyperparameters for bitcoin return rate prediction constitutes the novelty of the work. To assess the performance of the RRP-DLBFP technique, the Ethereum (ETH) return rate is chosen as the target, and the simulation results are investigated using different measures. The simulation outcomes highlight the supremacy of the RRP-DLBFP technique over current state-of-the-art techniques across diverse evaluation parameters. In terms of MSE, the proposed RRP-DLBFP achieved 0.0435 in training and 0.0655 in testing, compared with averages of 0.6139 and 0.723 for the compared methods.
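
    A minimal sketch of the LSTM return-rate predictor at the core of this pipeline follows, written in PyTorch under stated assumptions: sliding windows of past returns predict the next return, Adam does the training, the learning rate (the quantity OGSO tunes in the paper) is fixed here, and a synthetic series stands in for real ETH returns.

    import torch
    import torch.nn as nn

    returns = torch.randn(500) * 0.02            # synthetic daily returns
    win = 20
    X = torch.stack([returns[i:i + win] for i in range(len(returns) - win)])
    y = returns[win:]                            # next-step return targets

    class ReturnLSTM(nn.Module):
        def __init__(self, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden,
                                batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):                    # x: (batch, win)
            out, _ = self.lstm(x.unsqueeze(-1))  # (batch, win, hidden)
            return self.head(out[:, -1]).squeeze(-1)

    model = ReturnLSTM()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr: OGSO's target
    loss_fn = nn.MSELoss()
    for epoch in range(50):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()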

    A Novel Tunicate Swarm Algorithm With Hybrid Deep Learning Enabled Attack Detection for Secure IoT Environment

    The Internet of Things (IoT) paradigm has matured and expanded rapidly across many disciplines. Despite these advancements, IoT networks face a growing security threat as a result of constant, rapid changes in the network environment. To address these vulnerabilities, the fog layer provides a robust environment with additional tools to strengthen data security. Nevertheless, numerous attacks keep evolving in IoT and fog environments as new breaches are developed. To improve the efficiency of intrusion detection in the IoT, this research introduces a novel tunicate swarm algorithm (TSA) combined with a long short-term memory recurrent neural network (LSTM-RNN). The presented model first applies data pre-processing to transform the input data into a usable format; attacks in the IoT ecosystem are then identified by the LSTM-RNN model. In ANN models, there is a strong correlation between the number of parameters and the model’s capability and complexity, so it is critical to track the number of parameters in each layer to avoid over- or under-fitting; one way to prevent this is to adjust the number of layers in the network. The tunicate swarm algorithm is used to fine-tune the hyperparameter values of the LSTM-RNN model to improve its detection performance. The TSA solved several problems that could not be solved with traditional optimization methods, improved performance, and shortened the time the algorithm took to converge. A series of experiments was run on benchmark datasets. Compared to related models, the proposed TSA-LSTMRNN model achieved 92.67, 87.11, and 98.73 for accuracy, recall, and precision, respectively, indicating the superiority of the proposed model.
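
    For a sense of the TSA loop that tunes the detector, the sketch below implements a minimal tunicate swarm optimizer with the jet-propulsion and swarm-averaging updates in simplified form. Training the LSTM-RNN inside the loop would be slow, so a placeholder quadratic objective stands in for the validation loss the paper would minimize.

    import numpy as np

    def objective(x):
        """Placeholder for the LSTM-RNN validation loss on IoT traffic."""
        return float(np.sum(x ** 2))

    rng = np.random.default_rng(0)
    dim, n_agents, iters = 2, 20, 50
    lb, ub = -5.0, 5.0
    pop = rng.uniform(lb, ub, (n_agents, dim))
    best = min(pop, key=objective).copy()

    for t in range(iters):
        for i in range(n_agents):
            c1, c2, c3 = rng.random(3)
            m = rng.integers(1, 5)                # social-force mass in 1..4
            a = (c2 + c3 - 2.0 * c1) / m          # jet-propulsion coefficient
            pd = np.abs(best - rng.random() * pop[i])   # distance to food source
            new = best + a * pd if rng.random() >= 0.5 else best - a * pd
            pop[i] = np.clip((pop[i] + new) / (2.0 + c1), lb, ub)  # swarm step
            if objective(pop[i]) < objective(best):
                best = pop[i].copy()

    print("best position:", best, "fitness:", objective(best))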